Non-sparse Multiple Kernel Learning

Authors

  • Marius Kloft
  • Ulf Brefeld
  • Pavel Laskov
  • Sören Sonnenburg
Abstract

Approaches to multiple kernel learning (MKL) employ l1-norm constraints on the mixing coefficients to promote sparse kernel combinations. When features encode orthogonal characterizations of a problem, sparseness may lead to discarding useful information and may thus result in poor generalization performance. We study non-sparse multiple kernel learning by imposing an l2-norm constraint on the mixing coefficients. Empirically, l2-MKL proves robust against noisy and redundant feature sets and significantly improves the promoter detection rate compared to l1-norm and canonical MKL at large scales.
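The l2-constrained mixing described in the abstract can be illustrated with a simple alternating scheme. The sketch below is not the paper's algorithm: it assumes kernel ridge regression as the base learner (in place of the SVM) and uses the known closed-form lp-MKL weight update beta_m ∝ ||w_m||^(2/(p+1)), renormalized so that ||beta||_p = 1; the function name and toy setup are illustrative.

```python
import numpy as np

def lp_mkl_krr(kernels, y, p=2.0, lam=1.0, n_iter=20):
    """Illustrative lp-norm MKL sketch (p=2 gives non-sparse l2-MKL).

    Alternates between (1) solving kernel ridge regression on the
    combined kernel K = sum_m beta_m * K_m and (2) updating the mixing
    coefficients beta via beta_m ∝ ||w_m||^(2/(p+1)), projected back
    onto the lp-sphere ||beta||_p = 1.
    """
    M, n = len(kernels), len(y)
    beta = np.full(M, M ** (-1.0 / p))  # uniform start on the lp-sphere
    for _ in range(n_iter):
        K = sum(b * Km for b, Km in zip(beta, kernels))
        # KRR dual solution on the combined kernel
        alpha = np.linalg.solve(K + lam * np.eye(n), y)
        # squared per-kernel norms: ||w_m||^2 = beta_m^2 * alpha' K_m alpha
        norms = np.array([b**2 * (alpha @ Km @ alpha)
                          for b, Km in zip(beta, kernels)])
        beta = (norms + 1e-12) ** (1.0 / (p + 1))  # eps guards zero norms
        beta /= np.linalg.norm(beta, ord=p)        # enforce ||beta||_p = 1
    return beta, alpha

# Toy example: one informative kernel, one noise kernel. With p=2 the
# noise kernel keeps a small but nonzero weight (non-sparse mixing).
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 3))
y = X[:, 0]                              # target depends on feature 0 only
K_signal = np.outer(X[:, 0], X[:, 0])    # linear kernel, informative feature
K_noise = np.outer(X[:, 1], X[:, 1])     # linear kernel, noise feature
beta, _ = lp_mkl_krr([K_signal, K_noise], y, p=2.0, lam=0.1)
```

With p=2 the weight vector stays dense, whereas letting p tend to 1 drives irrelevant kernels toward zero weight, mirroring the sparse l1-MKL behavior the abstract contrasts against.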


Related articles

Sparse and Non-sparse Multiple Kernel Learning for Recognition

The development of multiple kernel techniques has become of particular interest to machine learning researchers in computer vision topics such as image processing, object classification, and object state recognition. Sparsity-inducing norms and non-sparse formulations promote different degrees of sparsity at the kernel coefficient level, while permitting non-sparse combination w...


Multiple Kernel Learning for Object Classification

Combining information from various image descriptors has become a standard technique for image classification tasks. Multiple kernel learning (MKL) approaches make it possible to determine the optimal combination of such similarity matrices and the optimal classifier simultaneously. Most MKL approaches employ an l1-regularization on the mixing coefficients to promote sparse solutions; an assumption that is...


Non-Sparse Multiple Kernel Fisher Discriminant Analysis

Sparsity-inducing multiple kernel Fisher discriminant analysis (MK-FDA) has been studied in the literature. Building on recent advances in non-sparse multiple kernel learning (MKL), we propose a non-sparse version of MK-FDA, which imposes a general lp norm regularisation on the kernel weights. We formulate the associated optimisation problem as a semi-infinite program (SIP), and adapt an iterat...


lp-Norm Multiple Kernel Learning

Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this l1-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtur...




Journal:

Volume   Issue 

Pages  -

Publication date: 2008